Runtime Analysis of Quality Diversity Algorithms
Quality diversity~(QD) is a branch of evolutionary computation that has gained
increasing interest in recent years. The Map-Elites QD approach defines a
feature space, i.e., a partition of the search space, and stores the best
solution for each cell of this space. We study a simple QD algorithm in the
context of pseudo-Boolean optimisation on the ``number of ones'' feature space,
where the $i$-th cell stores the best solution amongst those with a number of
ones in $[(i-1)k, ik-1]$. Here $k$ is a granularity parameter with $1 \le k \le
n+1$. We give a tight bound on the expected time until all cells are covered
for arbitrary fitness functions and for all values of $k$, and analyse the
expected optimisation time of QD on \textsc{OneMax} and other problems whose
structure aligns favourably with the feature space. On combinatorial problems
we show that QD efficiently finds a $(1-1/e)$-approximation when maximising any
monotone sub-modular function with a single uniform cardinality constraint.
Defining the feature space as the number of connected components of a connected
graph, we show that QD finds a minimum spanning tree in expected polynomial
time.
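The Map-Elites-style archive described above can be sketched in a few lines of Python on \textsc{OneMax} with the "number of ones" feature space. This is a minimal illustration, not the paper's exact algorithm: the mutation operator, parameter names, and budget are assumptions, and cells are indexed from zero so cell $i$ holds solutions with a number of ones in $[ik, (i+1)k-1]$.

```python
import random

def qd_onemax(n=20, k=1, budget=20000):
    """Sketch of a simple QD (Map-Elites) algorithm on OneMax with the
    "number of ones" feature space. Illustrative only: mutation operator,
    budget, and parameter names are assumptions, not from the paper."""
    fitness = lambda x: sum(x)        # OneMax: number of one-bits
    cell = lambda x: sum(x) // k      # 0-indexed cell: #ones in [i*k, (i+1)*k - 1]
    archive = {}                      # cell index -> best solution seen in that cell
    x0 = [random.randint(0, 1) for _ in range(n)]
    archive[cell(x0)] = x0
    for _ in range(budget):
        parent = random.choice(list(archive.values()))
        # standard bit mutation: flip each bit independently with prob 1/n
        child = [b ^ (random.random() < 1 / n) for b in parent]
        c = cell(child)
        if c not in archive or fitness(child) > fitness(archive[c]):
            archive[c] = child        # keep only the best solution per cell
    return archive
```

With granularity $k=1$ every number of ones gets its own cell, so covering all cells in particular discovers the all-ones optimum.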
First Steps Towards a Runtime Comparison of Natural and Artificial Evolution
Evolutionary algorithms (EAs) form a popular optimisation paradigm inspired
by natural evolution. In recent years the field of evolutionary computation has
developed a rigorous analytical theory to analyse their runtime on many
illustrative problems. Here we apply this theory to a simple model of natural
evolution. In the Strong Selection Weak Mutation (SSWM) evolutionary regime the
time between occurrence of new mutations is much longer than the time it takes
for a new beneficial mutation to take over the population. In this situation,
the population only contains copies of one genotype and evolution can be
modelled as a (1+1)-type process where the probability of accepting a new
genotype (improvements or worsenings) depends on the change in fitness.
We present an initial runtime analysis of SSWM, quantifying its performance
for various parameters and investigating differences to the (1+1)EA. We show
that SSWM can have a moderate advantage over the (1+1)EA at crossing fitness
valleys and study an example where SSWM outperforms the (1+1)EA by taking
advantage of information on the fitness gradient.
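The (1+1)-type process described above can be sketched with an acceptance rule based on the classical fixation probability; the exact functional form below, and the population-size and selection-strength parameters $N$ and $\beta$, are assumptions chosen for illustration rather than the paper's precise setting.

```python
import math
import random

def p_fix(delta_f, N=10, beta=1.0):
    """Fixation probability used as a sketch of SSWM's acceptance rule.
    N (population size) and beta (selection strength) are illustrative.
    Worsenings (delta_f < 0) are accepted with small positive probability."""
    if delta_f == 0:
        return 1.0 / N
    x = 2 * beta * delta_f
    return (1 - math.exp(-x)) / (1 - math.exp(-N * x))

def sswm_step(x, fitness, N=10, beta=1.0):
    """One (1+1)-type SSWM step: mutate, then accept the new genotype with
    probability p_fix of the fitness change (the monomorphic population
    either fixes the mutant or keeps the current genotype)."""
    n = len(x)
    y = [b ^ (random.random() < 1 / n) for b in x]
    if random.random() < p_fix(fitness(y) - fitness(x), N, beta):
        return y
    return x
```

Unlike the (1+1) EA's all-or-nothing acceptance, the acceptance probability here scales with the magnitude of the fitness change, which is what lets SSWM exploit fitness-gradient information.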
Self-adjusting Population Sizes for Non-elitist Evolutionary Algorithms: Why Success Rates Matter
Evolutionary algorithms (EAs) are general-purpose optimisers that come with several
parameters like the sizes of parent and offspring populations or the mutation rate. It is
well known that the performance of EAs may depend drastically on these parameters.
Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best
static parameters in EAs on discrete problems. However, the majority of these studies
concerned elitist EAs, and it is not clear whether the same mechanisms can be applied to non-elitist EAs. We study one of the best-known parameter
control mechanisms, the one-fifth success rule, to control the offspring population
size λ in the non-elitist (1, λ) EA. It is known that the (1, λ) EA has a sharp threshold
with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear
whether parameter control mechanisms are able to find and maintain suitable values
of λ. For OneMax we show that the answer crucially depends on the success rate s
(i.e., a one-(s+1)-th success rule). We prove that, if the success rate is appropriately
small, the self-adjusting (1, λ) EA optimises OneMax in O(n) expected generations
and O(n log n) expected evaluations, the best possible runtime for any unary unbiased
black-box algorithm. A small success rate is crucial: we also show that if the success
rate is too large, the algorithm has an exponential runtime on OneMax and other
functions with similar characteristics.
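The one-(s+1)-th success rule controlling λ can be sketched as a multiplicative update; the update strength F and the lower bound on λ are illustrative constants, not values from the paper. The rule is calibrated so that λ stays roughly stable when one in every s+1 generations is successful: a success multiplies λ by 1/F and each failure by F^(1/s), so one success followed by s failures leaves λ unchanged.

```python
def update_lambda(lam, success, F=1.5, s=1.0, lam_min=1.0):
    """Sketch of a one-(s+1)-th success rule for the offspring population
    size lambda in a (1,lambda) EA. F (update strength) and lam_min are
    illustrative assumptions."""
    if success:
        lam = lam / F                  # generation improved fitness: shrink lambda
    else:
        lam = lam * F ** (1.0 / s)     # generation failed: grow lambda
    return max(lam, lam_min)           # keep lambda at least 1
```

For s = 1 this is the classic one-fifth-style rule with success target 1/2; larger s makes λ grow more slowly after failures, which corresponds to the smaller success rates the paper shows are necessary to avoid the exponential-runtime regime.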